Professor Jay Lu studies behavioral economics at UCLA, building models to better understand human behavior. In the following conversation, Professor Lu explores how companies can use algorithms in their consumer-facing activities, and where the pitfalls might appear. He explains the axes of accuracy and fairness, and how to negotiate between these two critical factors while optimizing algorithms.

This interview has been edited for length and clarity:


Can you share some examples of how your research might be applied in the business world?

My more recent research has been [about] understanding the role of algorithms. In particular, my research focuses on the tradeoff between accuracy, which we all like, and fairness, which we increasingly care about. For businesses out there, it doesn't matter if you have a product or if you're making decisions on what to do with consumers. Often, you may want to incorporate algorithms into your products while staying aware of the implications for consumers. So, understanding the tradeoffs between accuracy and fairness, especially given the substantial interest in society in these factors, is helpful for any business leader.

What are the stakes for society and business?

If you are unaware of the stakes, you may only care about accuracy, right? If you incorporate your algorithm [without considering fairness], it may result in situations where certain groups are at a huge disadvantage. Not being aware of those implications could negatively impact how consumers and society see your company. There may even [be] implications in terms of future regulation. I think that these are important and impactful factors for any executive [or decision-maker] to bear in mind.

Why did you begin studying this issue?

There wasn't really one moment. Much of my research is in collaboration with other co-authors. For this topic, we're all economists. This notion of algorithmic fairness has been studied by computer scientists as well as other disciplines. [However] within economic theory, which is what we work on, there wasn't that much research. We realized that we have a toolkit as well. So, why don't we start thinking about these problems using our toolkit? Hopefully, we have something to bring to the table. So, we started thinking about this topic and realized we did have something interesting to say. As economists, we can contribute to this conversation.

Please dig into the research.

In contrast to how other disciplines think about this [topic], let's say computer science, for example, we want to provide a general toolkit that helps people process the tradeoffs between accuracy and fairness. Rather than saying: ‘Hey, this is the way you should be,’ or ‘This is how much weight you should put on fairness,’ or ‘This is how much weight you should put on accuracy,’ we say: ‘Look, we're going to provide you with tools that, depending on the situation, can help you decide [how to calibrate for] more accuracy in some cases [and] less accuracy in others.’

One takeaway, I would say, is providing simple guidelines [for when] you should worry about fairness versus other situations where, given the data, you can probably just go for accuracy. Beyond that, it's about being more general in providing the toolkit to deal with all these cases.

If I'm a business leader or a Chief Information Officer or what have you, how might I adopt those guidelines? How would I even go about thinking through what they should look like?

As a business leader, you first have to look at the implications. Suppose you have some product and then you engage some algorithm. Understanding exactly what the implications are is step one. Then you can step back and say, ‘Okay, if I use this in my product, this is the implication for fairness and accuracy.’ Then you can make the call. It's exactly that mapping that I think a lot of people are either unaware of or don't understand. Part of our research is providing the theoretical framework so you understand exactly how to go from the data and the algorithm to the exact tradeoffs that we [should] be aware of.

Going back to what I was saying before, that's the toolkit we provide. It's all in the toolkit. Those are the steps you would take. First, understand exactly what the implications are [of] whatever algorithm you're choosing, and then take a step back and ask, ‘Okay, these are all the things that I could get. Where do I want to be along this line?’

And so, talk to us a little bit about that spectrum.

It is all over the map. This is the spectrum, right? This is [how] you could [get] to where you want to be. Ultimately, that's up to whoever is making the decision. What we're going to tell you is how to get wherever you want to be. That's sort of the basic idea.

But of course, this is [within] the landscape of inequality, correct?

Correct. It may be different depending on the context, the time period, the product, or even whatever jurisdiction you're in -- it may vary. Part of the advantage is having a flexible toolkit to be able to deal with all those scenarios.

How important is it for organizations to have different people in the room?

I think this is where having people from different backgrounds is helpful. I think computer scientists have their own toolkit [and] their own way of thinking about these issues. You could have people in strategy, or people in psychology, or people in sociology who also have their views. As economists, we have a range of toolkits that we're very familiar with, particularly on this issue of accuracy versus fairness. There's a long tradition in economics studying the tradeoffs between efficiency and equity. That's why we have this very large set of tools that we can use to study these problems. I think it's good to have people in different disciplines. That way, you're better informed when you're making these decisions about what's best for your company.

Now, it is interesting because if you think of a company looking at this, like, who wouldn't want to say, ‘I want to be fair?’

Yeah.

But I guess that's not for everyone.

The key thing here is that there are tradeoffs. Economists are good at studying tradeoffs, I would say. So, exactly what are those tradeoffs? When do those tradeoffs matter? When are they more intense, right? So, we characterize in our work situations when these tradeoffs [of accuracy vs. fairness] are pretty severe. You have to make these hard calls about which way to go versus situations where they may not be as severe.

Can you share some examples?

So, imagine you're granting loans, and consumers are submitting applications, and you need to decide which applicants to accept. Suppose [you’re a] credit card company accepting applications. In a lot of these cases, you want to make good decisions, right? Because they impact your bottom line. But at the same time, you also want to be cognizant of the fairness dimension, where maybe you're not treating certain subgroups as fairly as other groups. In that case, how are these tradeoffs going to balance between accuracy and fairness?

So, what do we provide in our toolkit? [When] you look at the data, there are simple statistical tests that you can do. We provide one of them: if your data satisfies these conditions, the tradeoffs are not very severe, and you can go ahead and choose algorithms that would, let's say, optimize accuracy. However, there may be situations where these tradeoffs between accuracy and fairness are very stark. In those situations, you would need to pay more attention if you're worried about the fairness implications of your algorithms.
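To make the idea concrete, here is a minimal sketch of one such diagnostic on simulated loan data: comparing acceptance rates across two groups under an accuracy-driven cutoff (the "demographic parity gap"). Everything here is illustrative, including the synthetic data and the threshold; it is not the specific test from Professor Lu's research.

```python
import numpy as np

# Hypothetical applicant data: a repayment score and a group label.
# The score distributions and the cutoff below are assumptions for illustration.
rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)                      # group label: 0 or 1
score = rng.normal(loc=0.5 + 0.1 * group, scale=0.2, size=n)

threshold = 0.55                                        # accuracy-driven cutoff (assumed)
accept = score >= threshold

# Acceptance rate per group; the gap is a simple demographic-parity diagnostic.
rate_0 = accept[group == 0].mean()
rate_1 = accept[group == 1].mean()
parity_gap = abs(rate_0 - rate_1)

print(f"group 0 acceptance rate: {rate_0:.2%}")
print(f"group 1 acceptance rate: {rate_1:.2%}")
print(f"demographic parity gap:  {parity_gap:.2%}")
```

A small gap would suggest the accuracy/fairness tension is mild for this data and cutoff; a large gap signals that the tradeoff deserves closer attention before deploying the algorithm.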

If you had a magic wand, how quickly would you hope that people would adopt this type of toolkit?

One thing is, you know, providing a toolkit. But to be completely honest, I would be happy if people were just aware of these issues. Part of what we do, in addition to the toolkit, is provide a framework and the language to talk about these things, right? I would be happy if business leaders would have the awareness to say, ‘Whenever we have a new use for an algorithm, in terms of our products, let’s think about the tradeoffs between accuracy and fairness.’ Just having people talking about and being aware of that concept -- I would be happy [with] that being the result of our research.

What do we still need to learn or understand about inequality in the digital age? What do you see as the next research frontier?

I think there is still a lot to be done. It's a topic that society cares a lot about. Speaking from the economist's perspective -- there has been some modeling and some interest in understanding fairness in algorithms. But I believe there is a lot more to be done here. Economists, like I mentioned before, have this whole tradition of understanding tradeoffs. I think the attention has focused more on this over the past few years. But there [are] a lot more research questions you could be asking. Questions like: ‘What would happen in the long run?’ Suppose you make these tradeoffs between accuracy and fairness; they will have [long-run] implications. Consumers may react differently. So, you need to understand the feedback: your decisions [are] going to generate data that's going to go back into your algorithm.

These are also questions that economists have [studied] in other contexts, but not in this algorithm-design context. And so, those are areas that I think are ripe for future research. I think part of it is taking that history of looking at tradeoffs, and all the tools and techniques we have, and applying them to algorithmic design. That is sort of the advantage. So, I think we are at the point now where it's useful to take this toolkit that we have as economists for understanding and modeling tradeoffs and apply it to algorithmic design.
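The feedback loop mentioned above -- decisions generate data that flows back into the algorithm -- can be sketched in a toy simulation. This is my own illustration, not a model from the interview: a lender only observes outcomes for accepted applicants, so each round's data is shaped by the previous round's cutoff, and the cutoff drifts as a result. The update rule is an arbitrary assumption chosen to make the effect visible.

```python
import numpy as np

# Toy feedback loop: the lender sees scores only for applicants it accepted,
# then re-estimates its cutoff from that selected (hence biased) sample.
rng = np.random.default_rng(1)

threshold = 0.5
observed_scores = []

for round_num in range(5):
    scores = rng.uniform(0, 1, size=1_000)       # fresh applicant pool each round
    accepted = scores >= threshold
    observed_scores.extend(scores[accepted])     # outcomes observed only if accepted

    # Illustrative update rule: set the cutoff from the observed data alone.
    # Because rejected applicants never enter the data, the estimate -- and
    # therefore the cutoff -- ratchets upward over time.
    threshold = 0.9 * np.mean(observed_scores)

print(f"cutoff after 5 rounds: {threshold:.2f}")  # drifts above the initial 0.5
```

The point is qualitative: once an algorithm's decisions determine what data it sees next, long-run behavior can diverge from what a one-shot accuracy/fairness analysis predicts, which is the kind of dynamic question flagged as a frontier here.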